37 research outputs found

    Recovering 6D Object Pose and Predicting Next-Best-View in the Crowd

    Full text link
    Object detection and 6D pose estimation in the crowd (scenes with multiple object instances, severe foreground occlusions and background distractors) have become important problems in many rapidly evolving technological areas such as robotics and augmented reality. Single-shot 6D pose estimators with manually designed features are still unable to tackle the above challenges, motivating research towards unsupervised feature learning and next-best-view estimation. In this work, we present a complete framework for both single-shot 6D object pose estimation and next-best-view prediction based on Hough Forests, the state-of-the-art object pose estimator that performs classification and regression jointly. Rather than using manually designed features, we a) propose unsupervised features learnt from depth-invariant patches using a Sparse Autoencoder and b) offer an extensive evaluation of various state-of-the-art features. Furthermore, taking advantage of the clustering performed in the leaf nodes of Hough Forests, we learn to estimate the reduction of uncertainty in other views, formulating the problem of selecting the next-best-view. To further improve pose estimation, we propose an improved joint registration and hypotheses verification module as a final refinement step to reject false detections. We provide two additional challenging datasets inspired by realistic scenarios to extensively evaluate the state of the art and our framework: one is related to domestic environments and the other depicts a bin-picking scenario mostly found in industrial settings. We show that our framework significantly outperforms the state of the art both on public datasets and on ours. Comment: CVPR 2016 accepted paper, project page: http://www.iis.ee.ic.ac.uk/rkouskou/6D_NBV.htm
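
    A minimal sketch of the unsupervised feature-learning idea mentioned above: a sparse autoencoder trained on flattened depth patches, whose hidden activations serve as patch descriptors. The patch size, hidden-layer width, sparsity penalty and training loop below are illustrative assumptions, not the authors' actual settings.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SparseAutoencoder:
    """Single hidden layer, sigmoid encoder, linear decoder, L1 sparsity on the code."""
    def __init__(self, n_in, n_hidden, sparsity=1e-3, lr=0.1):
        self.W1 = rng.normal(0, 0.01, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.01, (n_hidden, n_in))
        self.b2 = np.zeros(n_in)
        self.sparsity = sparsity
        self.lr = lr

    def encode(self, X):
        return sigmoid(X @ self.W1 + self.b1)

    def step(self, X):
        H = self.encode(X)                      # hidden code
        R = H @ self.W2 + self.b2               # linear reconstruction
        err = R - X
        # Backprop of the reconstruction loss plus L1 sparsity on H.
        dH = err @ self.W2.T + self.sparsity * np.sign(H)
        dZ = dH * H * (1 - H)
        n = X.shape[0]
        self.W2 -= self.lr * H.T @ err / n
        self.b2 -= self.lr * err.mean(0)
        self.W1 -= self.lr * X.T @ dZ / n
        self.b1 -= self.lr * dZ.mean(0)
        return float((err ** 2).mean())

# Toy usage: 16x16 depth-invariant patches flattened to 256-d vectors.
patches = rng.random((512, 256))
ae = SparseAutoencoder(n_in=256, n_hidden=64)
for epoch in range(50):
    loss = ae.step(patches)
features = ae.encode(patches)   # unsupervised patch descriptors for the forest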

    Integration of 2D and 3D images for enhanced face authentication

    Get PDF
    This paper presents a complete face authentication system integrating 2D intensity and 3D range data, based on a low-cost, real-time structured light sensor. Novel algorithms are proposed that exploit depth data to achieve robust face detection, localization and authentication under conditions of background clutter, occlusion, face pose alteration and harsh illumination. The well-known embedded hidden Markov model (EHMM) technique for face authentication is applied to depth maps. A method for the enrichment of face databases with synthetically generated views depicting various head poses and illumination conditions is proposed. The performance of the proposed system is tested on an extensive face database of 3,000 images. Experimental results demonstrate significant gains resulting from the combined use of depth and intensity.
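
    A hedged sketch of one way depth data can make face localization robust to background clutter, loosely in the spirit of the system above: keep only the nearest depth layer and take its largest connected blob as the face candidate. The depth band, range limit and helper name are illustrative assumptions, not the paper's actual algorithm.

import numpy as np
from scipy import ndimage

def localize_face_candidate(depth, max_range_m=1.2, band_m=0.35):
    """depth: HxW array of range values in metres (0 = invalid pixel)."""
    valid = depth > 0
    if not valid.any():
        return None
    nearest = depth[valid].min()
    # Keep pixels within a shallow depth band behind the nearest valid point.
    mask = valid & (depth <= min(nearest + band_m, max_range_m))
    labels, n = ndimage.label(mask)
    if n == 0:
        return None
    # The largest connected component is taken as the face/head candidate.
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    best = 1 + int(np.argmax(sizes))
    ys, xs = np.nonzero(labels == best)
    return xs.min(), ys.min(), xs.max(), ys.max()   # bounding box

# Toy usage with a synthetic depth map: far background at 2 m, a near "head" at 0.8 m.
d = np.full((120, 160), 2.0)
d[30:90, 60:110] = 0.8
print(localize_face_candidate(d))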

    A novel framework for retrieval and interactive visualization of multimodal data

    Get PDF
    With the abundance of multimedia in web databases and the increasing user need for content of many modalities, such as images and sounds, new methods for the retrieval and visualization of multimodal media are required. In this paper, novel techniques for the retrieval and visualization of multimodal data, i.e., documents consisting of many modalities, are proposed. A novel cross-modal retrieval framework is presented, in which the results of several unimodal retrieval systems are fused into a single multimodal list by the introduction of a cross-modal distance. For the presentation of the retrieved results, a multimodal visualization framework is also proposed, which extends existing unimodal similarity-based visualization methods to multimodal data. The similarity measure between two multimodal objects is defined as the weighted sum of unimodal similarities, with the weights determined via an interactive user feedback scheme. Experimental results show that the cross-modal framework outperforms unimodal and other multimodal approaches, while the visualization framework enhances existing visualization methods by efficiently exploiting multimodality and user feedback.
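
    A minimal sketch of the weighted-sum similarity described above, with a naive relevance-feedback weight update. The modalities, the update rule and all parameter values are assumptions for illustration; the paper's interactive scheme may differ.

import numpy as np

def multimodal_similarity(sims, weights):
    """sims, weights: dicts keyed by modality (e.g. 'image', 'sound', 'text').
    Returns the normalized weighted sum of the unimodal similarities."""
    total = sum(weights.values())
    return sum(weights[m] * sims[m] for m in sims) / total

def update_weights(weights, relevant_sims, lr=0.2):
    """Boost modalities whose unimodal similarity was high for items the user
    marked as relevant; dampen the rest (a simple feedback heuristic)."""
    for m, s in relevant_sims.items():
        weights[m] = max(1e-3, weights[m] + lr * (s - 0.5))
    return weights

weights = {"image": 1.0, "sound": 1.0, "text": 1.0}
sims = {"image": 0.9, "sound": 0.3, "text": 0.6}
print(multimodal_similarity(sims, weights))        # before feedback
weights = update_weights(weights, sims)            # user marks this item relevant
print(multimodal_similarity(sims, weights))        # image modality weighted up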

    Prior Knowledge Based Motion Model Representation

    Get PDF
    This paper presents a new approach to modelling human walking from monocular image sequences. A kinematic model and a walking motion model are introduced in order to exploit prior knowledge. The proposed technique consists of two steps. First, an efficient feature point selection and tracking approach is used to compute the trajectories of feature points. Peaks and valleys of these trajectories are used to detect key frames, i.e., frames where both legs are in contact with the floor. Second, the motion models associated with each joint are locally tuned using those key frames. Unlike previous approaches, this tuning process is not performed at every frame, reducing CPU time. In addition, the movement's frequency is defined by the elapsed time between two consecutive key frames, which allows walking displacement to be handled at different speeds. Experimental results with different video sequences are presented.
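
    A hedged sketch of the key-frame idea described above: valleys of a tracked foot trajectory (its vertical image coordinate) approximate frames where the foot touches the floor, and the spacing between consecutive key frames yields the walking frequency. The synthetic signal, frame rate and peak-detection parameters are illustrative assumptions.

import numpy as np
from scipy.signal import find_peaks

fps = 25.0
t = np.arange(0, 4, 1 / fps)
# Synthetic vertical foot trajectory: periodic swing plus tracking noise.
foot_y = 10 * np.abs(np.sin(np.pi * t)) + np.random.default_rng(0).normal(0, 0.3, t.size)

peaks, _ = find_peaks(foot_y, distance=int(0.3 * fps))     # mid-swing frames
valleys, _ = find_peaks(-foot_y, distance=int(0.3 * fps))  # foot-on-floor key frames

key_frames = np.sort(valleys)
if key_frames.size >= 2:
    period_s = np.mean(np.diff(key_frames)) / fps          # time between key frames
    print("step frequency ~ %.2f Hz" % (1.0 / period_s))
print("key frames:", key_frames.tolist())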

    Towards Intelligent Autonomous Sorting of Unclassified Nuclear Wastes

    Get PDF
    Sorting of old and mixed nuclear waste is an essential process in nuclear decommissioning operations. The main bottleneck is the manual picking and separation of materials using remotely operated arms, which is slow and error-prone, especially with small items. Automation of the process is therefore desirable. Within the framework of the newly funded European project ECHORD++, experiment RadioRoSo, a pilot robotic cell is being developed and validated against industrial requirements on a range of sorting tasks. Industrial robots, a custom gripper, vision feedback and new manipulation skills will be developed. This paper presents the application context, the cell layout and the sorting approach.

    I-SEARCH - a multimodal search engine based on rich unified content description (RUCoD)

    Get PDF
    In this paper, we report on work around the I-SEARCH EU (FP7 ICT STREP) project, whose objective is the development of a multimodal search engine. We present the project's objectives and detail the achieved results, among which is a Rich Unified Content Description (RUCoD) format.